In machine learning, the hinge loss is a loss function used for training classifiers. The hinge loss is used for "maximum-margin" classification, most notably for support vector machines (SVMs). For an intended output t = ±1 and a classifier score y, the hinge loss of the prediction y is defined as

:\ell(y) = \max(0, 1 - t \cdot y)

Note that y should be the "raw" output of the classifier's decision function, not the predicted class label. For instance, in linear SVMs, y = w \cdot x + b, where (w, b) are the parameters of the hyperplane and x is the point to classify. It can be seen that when t and y have the same sign (meaning y predicts the right class) and |y| \ge 1, the hinge loss \ell(y) = 0, but when they have opposite signs, \ell(y) increases linearly with y (one-sided error).

==Extensions==

While SVMs are commonly extended to multiclass classification in a one-vs.-all or one-vs.-one fashion, there exists a "true" multiclass version of the hinge loss due to Crammer and Singer, defined for a linear classifier with per-class weight vectors w_y and correct class t as

:\ell(y) = \max(0, 1 + \max_{y \ne t} w_y \cdot x - w_t \cdot x)

In structured prediction, the hinge loss can be further extended to structured output spaces. Structured SVMs with margin rescaling use the following variant, where w denotes the SVM's parameters, y its predictions, \phi the joint feature function, and \Delta the Hamming loss:

:\ell(y) = \max(0, \Delta(y, t) + \langle w, \phi(x, y) \rangle - \langle w, \phi(x, t) \rangle)
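The two definitions above can be sketched in a few lines of Python. This is an illustrative sketch, not code from the article; the function names and the dict-based score representation are my own choices.

```python
def hinge_loss(t, y):
    """Binary hinge loss: t is the intended output in {-1, +1},
    y the raw classifier score (e.g. w . x + b for a linear SVM)."""
    return max(0.0, 1.0 - t * y)

def multiclass_hinge_loss(scores, t):
    """Crammer-Singer multiclass hinge loss: `scores` maps each class
    label to its score w_y . x; `t` is the correct class label."""
    worst_rival = max(s for label, s in scores.items() if label != t)
    return max(0.0, 1.0 + worst_rival - scores[t])

# A correctly classified point outside the margin incurs zero loss:
print(hinge_loss(+1, 2.5))   # 0.0
# Inside the margin, or with the wrong sign, the loss grows linearly:
print(hinge_loss(+1, 0.3))   # 0.7
print(hinge_loss(-1, 0.3))   # 1.3
# Multiclass: zero loss when the correct class beats every rival by >= 1:
print(multiclass_hinge_loss({"a": 2.0, "b": 0.5, "c": -1.0}, "a"))  # 0.0
```

Note the one-sided behaviour: a confident correct prediction (score beyond the margin) contributes nothing, so only margin violations drive the gradient during training.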